Device-reserved Memory as an Eviction-based File Cache
Authors
Abstract
In resource-constrained embedded systems, memory reservation, which provides physically contiguous memory allocation to devices (e.g., a video decoder or camera), decreases the memory efficiency of the system. Because the reserved memory is used exclusively by its owner device, it becomes increasingly under-utilized when the device is used infrequently. Previous approaches support on-demand reservation to relax the dedicated use of the reserved memory by allowing the kernel to exploit it while the owner device is idle. These approaches, however, either incur significant delay for on-demand reservation or sacrifice memory efficiency during on-demand reservation. Due to the delay, an end user could wait tens of seconds to use functions that depend on the owner device. When memory efficiency is compromised, the system is likely to incur additional read I/Os. In this paper, we propose a scheme that uses device-reserved memory as an eviction-based file cache, called eCache. eCache provides on-demand reservation by discarding the cached data in a contiguous memory region of eCache. Owing to the eviction-based placement policy, memory efficiency is preserved during on-demand reservation because the data cached in eCache is always less important than the data in an upper-level cache such as the in-kernel page cache. In addition, on-demand reservation requires only the minimal time needed to discard the cached data in eCache. When multiple reserved regions comprise eCache, cost-based region selection improves its memory (caching) efficiency when on-demand reservation occurs. We implemented eCache on the Nexus S smartphone and evaluated it with Android workloads. The evaluation results show that 21% additional memory reduces read I/Os by a factor of two to six compared with the memory reservation approach. Application launch performance is also improved by up to 16%, and the on-demand reservation time is reduced to a few milliseconds.
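As a rough illustration of how on-demand reservation with cost-based region selection could work, the sketch below discards the cached data in the reserved region whose loss is estimated to be cheapest and hands the contiguous range back to the owner device. All names (ecache_region, ecache_reserve, refetch_cost) and the cost model are hypothetical assumptions for illustration; they are not taken from the paper or from the Linux kernel.

/*
 * Hypothetical sketch of eCache-style on-demand reservation with
 * cost-based region selection among multiple reserved regions.
 */
#include <stddef.h>
#include <limits.h>

struct ecache_region {
    unsigned long base_pfn;       /* start of the contiguous reserved range   */
    unsigned long nr_pages;       /* size of the range in pages               */
    unsigned long cached_pages;   /* file pages currently cached here         */
    unsigned long hit_count;      /* hits observed since the last reservation */
    int in_use_by_device;         /* nonzero while the owner device holds it  */
};

/* Estimated cost of re-reading this region's cached data from storage
 * (an assumed, simplistic model: recently hit pages are costlier to lose). */
static unsigned long refetch_cost(const struct ecache_region *r)
{
    return r->cached_pages + 4 * r->hit_count;
}

/*
 * Reserve one contiguous region for the owner device.  Because eCache only
 * holds data evicted from the upper-level page cache, discarding it never
 * drops the hottest copy, so reservation reduces to invalidating the
 * cheapest region.
 */
struct ecache_region *
ecache_reserve(struct ecache_region *regions, size_t n)
{
    struct ecache_region *victim = NULL;
    unsigned long best = ULONG_MAX;

    for (size_t i = 0; i < n; i++) {
        if (regions[i].in_use_by_device)
            continue;
        unsigned long cost = refetch_cost(&regions[i]);
        if (cost < best) {
            best = cost;
            victim = &regions[i];
        }
    }

    if (victim) {
        /* Discard cached blocks: clean file pages need no writeback,
         * which is why reservation can finish in milliseconds. */
        victim->cached_pages = 0;
        victim->hit_count = 0;
        victim->in_use_by_device = 1;
    }
    return victim;   /* NULL if every region is already reserved */
}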
Similar resources
A Per-File Partitioned Page Cache
In this paper we describe a new design of the operating system page cache. Page caches form an important part of the memory hierarchy and are used to access file-system data. In most operating systems, there exists a single page cache whose contents are replaced according to an LRU eviction policy. We design and implement a page cache which is partitioned by file: the per-file page cache. The per...
The Effects of Memory-Rich Environments on File System Microbenchmarks
File system performance has been greatly influenced by disk caching mechanisms. As the size of memory increases, common workloads are more likely to run completely from memory, and the effects of L2 caching and underlying hardware are becoming more visible. This paper investigates performance anomalies observed when measuring and comparing the memory performance of various leading file systems....
Soft Updates Made Simple and Fast on Non-volatile Memory
Fast, byte-addressable NVM promises near cache latency and near memory bus throughput for file system operations. However, unanticipated cache line eviction may lead to disordered metadata update and thus existing NVM file systems (NVMFS) use synchronous cache flushes to ensure consistency, which extends critical path latency. In this paper, we revisit soft updates, an intriguing idea that elim...
Synergy: A Hypervisor Managed Holistic Caching System
Efficient system-wide memory management is an important challenge for over-commitment based hosting in virtualized systems. Due to the limitation of memory domains considered for sharing, current deduplication solutions simply cannot achieve system-wide deduplication. Popular memory management techniques like sharing and ballooning enable important memory usage optimizations individually. Howev...
Eviction-based Cache Placement for Storage Caches
Most previous work on buffer cache management uses an access-based placement policy that places a data block into a buffer cache at the block’s access time. This paper presents an eviction-based placement policy for a storage cache that usually sits in the lower level of a multi-level buffer cache hierarchy and thereby has different access patterns from upper levels. The main idea of the evicti...
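For illustration, the following minimal sketch shows the eviction-based placement idea described above, which eCache builds on: the lower-level cache is probed on an upper-level miss but only admits a block when the upper level evicts it, so the two levels avoid duplicating hot data. The tiny FIFO cache and all function names are hypothetical, not from the cited paper.

/* Illustrative sketch of eviction-based placement for a lower-level cache. */
#include <stdbool.h>

#define LOWER_SLOTS 64

static long lower_ids[LOWER_SLOTS];
static bool lower_valid[LOWER_SLOTS];
static int  lower_clock;

/* Probe the lower-level cache on an upper-level miss; note that blocks are
 * NOT admitted here on access, unlike access-based placement. */
bool lower_lookup(long id)
{
    for (int i = 0; i < LOWER_SLOTS; i++)
        if (lower_valid[i] && lower_ids[i] == id)
            return true;
    return false;   /* caller falls through to storage */
}

/* Admit a block only at the moment the upper-level cache evicts it. */
void on_upper_eviction(long id)
{
    lower_ids[lower_clock] = id;      /* simple FIFO replacement */
    lower_valid[lower_clock] = true;
    lower_clock = (lower_clock + 1) % LOWER_SLOTS;
}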
Journal:
Volume / Issue:
Pages:
Publication date: 2012